Stationary Activations for Uncertainty Calibration in Deep Learning

Neural Information Processing Systems

We introduce a new family of non-linear neural network activation functions that mimic the properties induced by the widely-used Matérn family of kernels in Gaussian process (GP) models.
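For reference, the Matérn kernel family that these activations mimic has simple closed forms at the common half-integer smoothness values. A minimal NumPy sketch of those cases follows; the parameter names (`lengthscale`, `variance`) are illustrative conventions, not notation from the paper:

```python
import numpy as np

def matern(r, nu=1.5, lengthscale=1.0, variance=1.0):
    """Matérn covariance as a function of distance r, for nu in {0.5, 1.5, 2.5}.

    Smaller nu gives rougher sample paths (nu=0.5 is the exponential kernel);
    nu -> infinity recovers the squared-exponential (RBF) kernel.
    """
    s = np.abs(r) / lengthscale
    if nu == 0.5:
        k = np.exp(-s)
    elif nu == 1.5:
        k = (1.0 + np.sqrt(3.0) * s) * np.exp(-np.sqrt(3.0) * s)
    elif nu == 2.5:
        k = (1.0 + np.sqrt(5.0) * s + 5.0 * s**2 / 3.0) * np.exp(-np.sqrt(5.0) * s)
    else:
        raise ValueError("only half-integer nu = 0.5, 1.5, 2.5 supported here")
    return variance * k
```

All three variants equal `variance` at zero distance and decay monotonically, which is the stationarity property the activations are designed to reproduce.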


Active Learning of Fractional-Order Viscoelastic Model Parameters for Realistic Haptic Rendering

Tolasa, Harun, Gemalmaz, Gorkem, Patoglu, Volkan

arXiv.org Artificial Intelligence

Fractional-order models provide an effective means of describing intrinsically time-dependent viscoelastic dynamics with few parameters, as these models can naturally capture memory effects. However, due to the unintuitive frequency-dependent coupling between the order of the fractional element and the other parameters, determining appropriate parameters for fractional-order models that yield high perceived realism remains a significant challenge. In this study, we propose a systematic means of determining the parameters of fractional-order viscoelastic models that optimizes the perceived realism of haptic rendering across general populations. First, we demonstrate that the parameters of fractional-order models can be effectively optimized through active learning, via qualitative feedback-based human-in-the-loop (HiL) optimizations, to ensure consistently high realism ratings for each individual. Second, we propose a rigorous method to combine HiL optimization results to form an aggregate perceptual map trained on the entire dataset and demonstrate the selection of population-level optimal parameters from this representation that are broadly perceived as realistic across general populations. Finally, we provide evidence of the effectiveness of the generalized fractional-order viscoelastic model parameters by characterizing their perceived realism through human-subject experiments. Overall, generalized fractional-order viscoelastic models established through the proposed HiL optimization and aggregation approach possess the potential to significantly improve the sim-to-real transition performance of medical training simulators. Index Terms--Viscoelastic materials, fractional-order standard linear solid model, haptic rendering, human-in-the-loop optimization, perceived realism, and medical training simulators.
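The fractional element the abstract refers to is the "springpot", whose stress is proportional to a fractional derivative of strain; in the frequency domain its dynamic stiffness is K(jω) = c·(jω)^α. A minimal NumPy sketch (parameter names `c` and `alpha` are generic, not the paper's notation) illustrates the frequency-dependent coupling the abstract mentions, since α simultaneously sets the loss angle (απ/2) and the slope of |K| versus frequency:

```python
import numpy as np

def springpot_stiffness(omega, c=1.0, alpha=0.5):
    """Complex dynamic stiffness of a springpot element, K(jw) = c * (j*w)**alpha.

    alpha = 0 recovers a pure spring (stiffness c, loss angle 0);
    alpha = 1 recovers a pure damper (stiffness c*j*w, loss angle pi/2).
    """
    return c * (1j * omega) ** alpha
```

Because changing α shifts both the magnitude and the phase of the response at every frequency, it cannot be tuned independently of the stiffness and damping parameters, which is why perceptually-driven, human-in-the-loop tuning is attractive here.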


Can Synthetic Data Improve Symbolic Regression Extrapolation Performance?

Ramlan, Fitria Wulandari, O'Riordan, Colm, Kronberger, Gabriel, McDermott, James

arXiv.org Artificial Intelligence

Many machine learning models perform well when making predictions within the training data range, but often struggle when required to extrapolate beyond it. Symbolic regression (SR) using genetic programming (GP) can generate flexible models but is prone to unreliable behaviour in extrapolation. This paper investigates whether adding synthetic data can help improve performance in such cases. We apply Kernel Density Estimation (KDE) to identify regions in the input space where the training data is sparse. Synthetic data is then generated in those regions using a knowledge distillation approach: a teacher model generates predictions on new input points, which are then used to train a student model. We evaluate this method across six benchmark datasets, using neural networks (NN), random forests (RF), and GP both as teacher models (to generate synthetic data) and as student models (trained on the augmented data). Results show that GP models can often improve when trained on synthetic data, especially in extrapolation areas. However, the improvement depends on the dataset and teacher model used. The largest improvements are observed when synthetic data from GPe is used to train GPp in extrapolation regions. In interpolation areas, performance changes only slightly. We also observe heterogeneous errors, where model performance varies across different regions of the input space. Overall, this approach offers a practical solution for better extrapolation. Note: An earlier version of this work appeared in the GECCO 2025 Workshop on Symbolic Regression. This arXiv version corrects several parts of the original submission.
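The KDE-plus-distillation pipeline can be sketched in a few lines with scikit-learn. This is a minimal illustration, not the paper's implementation: a random forest stands in for both teacher and student, and the bandwidth, density quantile, and domain are arbitrary choices:

```python
import numpy as np
from sklearn.neighbors import KernelDensity
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)

# Training data clustered in [0, 2]; the target function extends beyond it.
X_train = rng.uniform(0.0, 2.0, size=(200, 1))
y_train = np.sin(X_train[:, 0]) + 0.1 * rng.normal(size=200)

# 1) KDE over the inputs to locate sparse regions of a wider domain.
kde = KernelDensity(bandwidth=0.3).fit(X_train)
X_grid = np.linspace(-1.0, 4.0, 500).reshape(-1, 1)
log_dens = kde.score_samples(X_grid)
X_sparse = X_grid[log_dens < np.quantile(log_dens, 0.3)]  # low-density inputs

# 2) Teacher labels the sparse inputs (knowledge distillation).
teacher = RandomForestRegressor(n_estimators=100, random_state=0).fit(X_train, y_train)
y_synth = teacher.predict(X_sparse)

# 3) Student trains on the augmented dataset.
X_aug = np.vstack([X_train, X_sparse])
y_aug = np.concatenate([y_train, y_synth])
student = RandomForestRegressor(n_estimators=100, random_state=0).fit(X_aug, y_aug)
```

The synthetic points land almost entirely outside the original training range, so the student is explicitly supervised in the extrapolation regions where GP-based SR is otherwise unconstrained.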



Beyond Means: A Dynamic Framework for Predicting Customer Satisfaction

Naumzik, Christof, Maarouf, Abdurahman, Feuerriegel, Stefan, Weinmann, Markus

arXiv.org Artificial Intelligence

Online ratings influence customer decision-making, yet standard aggregation methods, such as the sample mean, fail to adapt to quality changes over time and ignore review heterogeneity (e.g., review sentiment, a review's helpfulness). To address these challenges, we demonstrate the value of using the Gaussian process (GP) framework for rating aggregation. Specifically, we present a tailored GP model that captures the dynamics of ratings over time while additionally accounting for review heterogeneity. Based on 121,123 ratings from Yelp, we compare the predictive power of different rating aggregation methods in predicting future ratings, thereby finding that the GP model is considerably more accurate and reduces the mean absolute error by 10.2% compared to the sample mean. Our findings have important implications for marketing practitioners and customers. By moving beyond means, designers of online reputation systems can display more informative and adaptive aggregated rating scores that are accurate signals of expected customer satisfaction.
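The core idea, regressing ratings on time with a GP so the aggregate tracks quality changes, can be sketched with scikit-learn. This is a simplified stand-in for the paper's tailored model: it omits review heterogeneity, and the simulated drift, kernel choice, and noise level are all arbitrary:

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(1)

# Ratings over time: latent quality drifts upward, observations are noisy 1-5 stars.
t = np.sort(rng.uniform(0.0, 36.0, size=300)).reshape(-1, 1)   # months
quality = 3.0 + 0.03 * t[:, 0]                                  # slow quality change
ratings = np.clip(np.round(quality + rng.normal(0.0, 0.7, 300)), 1, 5)

kernel = RBF(length_scale=12.0) + WhiteKernel(noise_level=0.5)
gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(t, ratings)

# Aggregate score "now" versus the sample mean over the whole history.
now = np.array([[36.0]])
gp_score, gp_std = gp.predict(now, return_std=True)
sample_mean = ratings.mean()
```

When quality has improved over time, the GP estimate at the current date sits above the historical sample mean, which is exactly the adaptivity the sample mean lacks; `gp_std` additionally quantifies how trustworthy the displayed score is.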




Supplementary Material: "Optimal Order Simple Regret for Gaussian Process Bandits"

Neural Information Processing Systems

Mercer's representation theorem indicates that [equation lost in extraction]. The proof of Proposition 1 uses the following lemma. Lemma 1: For a positive definite kernel k and its corresponding RKHS, the following holds: [inequality lost in extraction]. For a proof, see [1, Lemma 3.9]. Expanding the RKHS norm on the right-hand side through an algebraic manipulation, we get [derivation involving k(., x) lost in extraction]; the first equation uses the reproducing property of the RKHS. We now move to the proof of Theorem 2. For simplicity of notation, let τ = [expression lost in extraction]. The MVR algorithm selects the points with the highest variance first.